75 research outputs found
DeepSoft: A vision for a deep model of software
Although software analytics has experienced rapid growth as a research area, it has not yet reached its full potential for wide industrial adoption. Most existing work in software analytics still relies heavily on costly manual feature engineering and mainly addresses traditional classification problems rather than predicting future events. We present a
vision for \emph{DeepSoft}, an \emph{end-to-end} generic framework for modeling
software and its development process to predict future risks and recommend
interventions. DeepSoft, partly inspired by human memory, is built on the Long Short-Term Memory (LSTM) deep learning architecture, which can learn the long-term temporal dependencies that occur in software evolution. Such deep-learned patterns of software can be used to address a
range of challenging problems such as code and task recommendation and
prediction. DeepSoft provides a new approach for research into modeling of
source code, risk prediction and mitigation, developer modeling, and
automatically generating code patches from bug reports.
Comment: FSE 201
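The LSTM architecture DeepSoft builds on can be sketched with a single scalar cell. The toy weights and event encoding below are illustrative assumptions, not DeepSoft's learned model; they only show how the gates let the cell state carry long-term memory across a sequence of software events:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # Gates combine the current event x with the previous hidden state h_prev.
    # w maps gate name -> (weight_x, weight_h, bias); scalars for clarity.
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])   # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])   # input gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])   # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2]) # candidate
    c = f * c_prev + i * g        # cell state carries long-term memory
    h = o * math.tanh(c)          # hidden state summarises history so far
    return h, c

# Toy weights; a real model would learn these from issue/commit sequences.
w = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "o", "g")}
h, c = 0.0, 0.0
for event in [1.0, 0.0, 1.0, 1.0]:  # e.g. encoded issue events over time
    h, c = lstm_step(event, h, c, w)
```

A full model would vectorise these equations and stack layers, but the gating structure is the same.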
An agent-oriented approach to change propagation in software evolution
Software maintenance and evolution are inevitable activities, since almost all software that is useful and successful stimulates user-generated requests for changes and improvements. One of the most critical problems in software maintenance and evolution is maintaining consistency between software artefacts by propagating changes correctly. Although many approaches have been proposed, automated change propagation remains a significant technical challenge in software engineering. In this paper we present a novel, agent-oriented approach to change propagation in evolving software systems developed using the Prometheus methodology. A metamodel with a set of Object Constraint Language (OCL) rules forms the basis of the proposed framework. The underlying change propagation mechanism of our framework is based on the well-known Belief-Desire-Intention (BDI) agent architecture. Traceability information and design heuristics are also incorporated into the framework to facilitate the change propagation process.
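The BDI-style propagation mechanism described above can be illustrated with a minimal sketch. The artefact names and dependency rules below are hypothetical stand-ins for the Prometheus metamodel and OCL rules, loosely mapping changed artefacts to beliefs and pending propagation steps to intentions:

```python
from collections import deque

# Hypothetical artefact dependency graph: dependent -> artefacts it must
# stay consistent with. A change to a source makes each dependent stale.
DEPENDS_ON = {
    "AgentDesign": ["GoalModel"],
    "Code": ["AgentDesign"],
    "Tests": ["Code"],
}

def propagate(changed):
    beliefs = {changed}            # beliefs: what is known to have changed
    intentions = deque([changed])  # intentions: pending propagation steps
    order = []
    while intentions:
        artefact = intentions.popleft()
        order.append(artefact)
        for dep, sources in DEPENDS_ON.items():
            if artefact in sources and dep not in beliefs:
                beliefs.add(dep)   # a consistency rule fired: dep is stale
                intentions.append(dep)
    return order

order = propagate("GoalModel")  # change ripples through dependent artefacts
```

A real BDI agent would also deliberate over alternative plans for each stale artefact rather than propagating mechanically.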
On Business Services Representation – The 3 x 3 x 3 Approach
The increasing popularity and influence of service-oriented computing give rise to the need for representational and methodological support for the development and management of business services. From an IT perspective, there is a proliferation of methods and languages for representing Web services. Unfortunately, there has not been much work on modeling high-level services from a business perspective. Modeling business services should arguably capture their inherent features, along with many other representational artifacts. We propose a novel approach to business services representation featuring a three-dimensional representational space whose dimensions represent the service consumer, the service provider, and the service context. We also discuss how the proposed representation approach provides methodological support to the area of service orientation. Finally, we present in-progress work on the application of our approach.
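One way to picture the three representational dimensions named in the abstract is as fields of a simple record. The service and field values below are invented examples for illustration, not part of the 3 x 3 x 3 approach itself:

```python
from dataclasses import dataclass

# Hypothetical encoding of the three dimensions of the representational
# space: consumer, provider, and context of a business service.
@dataclass
class BusinessService:
    name: str
    consumer: str  # who receives the service's value
    provider: str  # who delivers the service
    context: str   # setting in which the service is rendered

loan = BusinessService("LoanApproval", consumer="Applicant",
                       provider="RetailBank", context="ConsumerLending")
```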
Adversarial Patch Generation for Automatic Program Repair
Automatic program repair (APR) has attracted growing interest in recent years, with numerous techniques proposed. One notable line of research in APR is search-based techniques, which generate repair candidates via syntactic analyses and search for valid repairs in the generated search space. In this work, we
explore an alternative approach which is inspired by the adversarial notion of
bugs and repairs. Our approach leverages the deep learning Generative
Adversarial Networks (GANs) architecture to suggest repairs that are as close
as possible to human-generated repairs. Preliminary evaluations demonstrate promising results: our approach generates repairs identical to the human-written fixes for 21.2% of 500 bugs.
Comment: Submitted to IEEE Software's special issue on Automatic Program Repair.
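The adversarial generator-discriminator dynamic the abstract leverages can be mimicked with stub functions. The toy "patch", scoring rule, and update step below are assumptions for illustration only and do not reproduce the paper's GAN; real GAN-based APR would train neural networks over token sequences:

```python
import random
random.seed(0)

# The "generator" proposes candidate patches; the "discriminator" scores
# how human-like each one is, and the feedback pushes the generator to
# produce patches closer to the human fix.
HUMAN_FIX = "if (x != null) return x.size();"

def generate(vocab_bias):
    # Propose a patch by sampling tokens; bias grows toward human style.
    return " ".join(t if random.random() < vocab_bias else "<?>"
                    for t in HUMAN_FIX.split())

def discriminate(patch):
    # Fraction of tokens matching the human fix: 1.0 = indistinguishable.
    got, want = patch.split(), HUMAN_FIX.split()
    return sum(g == w for g, w in zip(got, want)) / len(want)

bias = 0.1
for _ in range(50):  # adversarial feedback loop
    score = discriminate(generate(bias))
    bias = min(1.0, bias + 0.05 * (1.0 - score))  # move generator human-ward

final = discriminate(generate(bias))
```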
A Taxonomy for Mining and Classifying Privacy Requirements in Issue Reports
Digital and physical footprints are trails of user activity collected through the use of software applications and systems. As software becomes ubiquitous, protecting user privacy has become challenging. With increasing user privacy awareness and the advent of privacy regulations and policies, there is an emerging need to implement software systems that enhance the protection of personal data. However, existing privacy regulations and policies provide only high-level principles, which makes it difficult for software engineers to design and implement privacy-aware systems. In this paper, we develop a
taxonomy that provides a comprehensive set of privacy requirements based on two
well-established and widely-adopted privacy regulations and frameworks, the
General Data Protection Regulation (GDPR) and ISO/IEC 29100. These requirements are refined to a level that is implementable and easy for software engineers to understand, thus supporting them in complying with existing regulations and standards. We have also performed a study on how two large
open-source software projects (Google Chrome and Moodle) address the privacy
requirements in our taxonomy through mining their issue reports. The paper
discusses how the collected issues were classified, and presents the findings
and insights generated from our study.
Comment: Submitted to IEEE Transactions on Software Engineering on 23 December 202
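A keyword heuristic gives a rough feel for mining issue reports against such a taxonomy. The categories and keywords below are hypothetical and do not reproduce the paper's taxonomy or its actual classification procedure:

```python
# Hypothetical keyword rules mapping taxonomy categories to trigger phrases.
TAXONOMY = {
    "consent": ["consent", "opt-in", "opt-out"],
    "data_deletion": ["delete my data", "erasure", "right to be forgotten"],
    "transparency": ["privacy policy", "data usage", "disclosure"],
}

def classify(issue_text):
    # Tag an issue report with every category whose keywords it mentions.
    text = issue_text.lower()
    return sorted(cat for cat, keys in TAXONOMY.items()
                  if any(k in text for k in keys))

labels = classify("User requests erasure of account; update privacy policy.")
```

In practice such keyword matching would only seed a manual or learned classification, since privacy concerns are often phrased indirectly.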
DeepJIT: an end-to-end deep learning framework for just-in-time defect prediction
National Research Foundation (NRF) Singapore
Quantum Software Analytics: Opportunities and Challenges
Quantum computing systems rely on the principles of quantum mechanics to perform many challenging tasks more efficiently than their classical counterparts. In classical software engineering, the software life cycle is
used to document and structure the processes of design, implementation, and
maintenance of software applications. It helps stakeholders understand how to
build an application. In this paper, we summarize a set of software analytics
topics and techniques in the development life cycle that can be leveraged and
integrated into quantum software application development. The results of this
work can assist researchers and practitioners in better understanding the emerging quantum-specific development activities, challenges, and opportunities in the next generation of quantum software.
Safety and efficacy of fluoxetine on functional outcome after acute stroke (AFFINITY): a randomised, double-blind, placebo-controlled trial
Background
Trials of fluoxetine for recovery after stroke have reported conflicting results. The Assessment oF FluoxetINe In sTroke recoverY (AFFINITY) trial aimed to determine whether daily oral fluoxetine for 6 months after stroke improves functional outcome in an ethnically diverse population.
Methods
AFFINITY was a randomised, parallel-group, double-blind, placebo-controlled trial done in 43 hospital stroke units in Australia (n=29), New Zealand (four), and Vietnam (ten). Eligible patients were adults (aged ≥18 years) with a clinical diagnosis of acute stroke in the previous 2–15 days, brain imaging consistent with ischaemic or haemorrhagic stroke, and a persisting neurological deficit that produced a modified Rankin Scale (mRS) score of 1 or more. Patients were randomly assigned 1:1 via a web-based system using a minimisation algorithm to once daily, oral fluoxetine 20 mg capsules or matching placebo for 6 months. Patients, carers, investigators, and outcome assessors were masked to the treatment allocation. The primary outcome was functional status, measured by the mRS, at 6 months. The primary analysis was an ordinal logistic regression of the mRS at 6 months, adjusted for minimisation variables. Primary and safety analyses were done according to the patient's treatment allocation. The trial is registered with the Australian New Zealand Clinical Trials Registry, ACTRN12611000774921.
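The minimisation algorithm used for randomisation can be sketched as follows. The single stratification factor (country) and the tie-breaking rule here are simplifying assumptions for illustration, not the trial's actual web-based algorithm:

```python
import random
random.seed(1)

# Per-stratum arm counts; minimisation steers each new patient to the arm
# that keeps the strata balanced, breaking exact ties at random.
counts = {("AUS", "fluoxetine"): 0, ("AUS", "placebo"): 0,
          ("VNM", "fluoxetine"): 0, ("VNM", "placebo"): 0}

def assign(country):
    n_f = counts[(country, "fluoxetine")]
    n_p = counts[(country, "placebo")]
    if n_f < n_p:
        arm = "fluoxetine"
    elif n_p < n_f:
        arm = "placebo"
    else:
        arm = random.choice(["fluoxetine", "placebo"])  # tie: randomise
    counts[(country, arm)] += 1
    return arm

for _ in range(100):
    assign(random.choice(["AUS", "VNM"]))

# Largest within-country imbalance between the two arms.
imbalance = max(abs(counts[(c, "fluoxetine")] - counts[(c, "placebo")])
                for c in ("AUS", "VNM"))
```

Minimisation over several factors works the same way, summing marginal imbalances across factors before choosing the arm.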
Findings
Between Jan 11, 2013, and June 30, 2019, 1280 patients were recruited in Australia (n=532), New Zealand (n=42), and Vietnam (n=706), of whom 642 were randomly assigned to fluoxetine and 638 were randomly assigned to placebo. Mean duration of trial treatment was 167 days (SD 48·1). At 6 months, mRS data were available in 624 (97%) patients in the fluoxetine group and 632 (99%) in the placebo group. The distribution of mRS categories was similar in the fluoxetine and placebo groups (adjusted common odds ratio 0·94, 95% CI 0·76–1·15; p=0·53). Compared with patients in the placebo group, patients in the fluoxetine group had more falls (20 [3%] vs seven [1%]; p=0·018), bone fractures (19 [3%] vs six [1%]; p=0·014), and epileptic seizures (ten [2%] vs two [<1%]; p=0·038) at 6 months.
Interpretation
Oral fluoxetine 20 mg daily for 6 months after acute stroke did not improve functional outcome and increased the risk of falls, bone fractures, and epileptic seizures. These results do not support the use of fluoxetine to improve functional outcome after stroke.